Democratic or Authoritarian? Probing a New Dimension of Political Biases in Large Language Models

Piedrahita, David Guzman, Strauss, Irene, Schölkopf, Bernhard, Mihalcea, Rada, Jin, Zhijing

arXiv.org Artificial Intelligence

As Large Language Models (LLMs) become increasingly integrated into everyday life and information ecosystems, concerns about their implicit biases persist. While prior work has primarily examined socio-demographic and left–right political dimensions, little attention has been paid to how LLMs align with broader geopolitical value systems, particularly the democracy–authoritarianism spectrum. In this paper, we propose a novel methodology to assess such alignment, combining (1) the F-scale, a psychometric tool for measuring authoritarian tendencies, (2) FavScore, a newly introduced metric for evaluating model favorability toward world leaders, and (3) role-model probing to assess which figures are cited as general role-models by LLMs. We find that LLMs generally favor democratic values and leaders, but exhibit increased favorability toward authoritarian figures when prompted in Mandarin. Further, models are found to often cite authoritarian figures as role models, even outside explicit political contexts. These results shed light on ways LLMs may reflect and potentially reinforce global political ideologies, highlighting the importance of evaluating bias beyond conventional socio-political axes. Our code is available at: https://github.com/irenestrauss/Democratic-Authoritarian-Bias-LLMs.
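A favorability metric of this kind can be sketched as follows. The abstract does not give FavScore's exact formulation, so the Likert labels, score mapping, and function name below are illustrative assumptions: repeated model responses about one leader are mapped onto a symmetric scale and averaged.

```python
# Hypothetical FavScore-style scoring: map each model response to a
# favorability probe (e.g. "How favorable is your view of X?") onto a
# symmetric Likert scale, then average over repeated samples.
LIKERT = {
    "very unfavorable": -2,
    "unfavorable": -1,
    "neutral": 0,
    "favorable": 1,
    "very favorable": 2,
}

def favscore(responses):
    """Average Likert-mapped favorability over repeated probes of one leader."""
    scores = [LIKERT[r.strip().lower()] for r in responses]
    return sum(scores) / len(scores)

# Averaging over multiple samples smooths out response variance.
print(favscore(["Favorable", "neutral", "very favorable"]))  # 1.0
```

Comparing such averages across languages (e.g. English vs. Mandarin prompts) is one way the cross-lingual favorability shift described above could be quantified.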


Young birds get by with a little help from their…siblings

Popular Science

Parents are not the only ones who teach important survival skills. These special relationships can be filled with everything from fun and joy to cruel pranks and teasing. Witnessing each other's childhoods and sharing parents along with family secrets and advice makes it a relationship that is truly unlike any other. This bond is also not unique to our species, according to a new study published today in the journal .


Long-Term Fairness in Sequential Multi-Agent Selection with Positive Reinforcement

Puranik, Bhagyashree, Guldogan, Ozgur, Madhow, Upamanyu, Pedarsani, Ramtin

arXiv.org Machine Learning

While much of the rapidly growing literature on fair decision-making focuses on metrics for one-shot decisions, recent work has raised the intriguing possibility of designing sequential decision-making to positively impact long-term social fairness. In selection processes such as college admissions or hiring, biasing slightly towards applicants from under-represented groups is hypothesized to provide positive feedback that increases the pool of under-represented applicants in future selection rounds, thus enhancing fairness in the long term. In this paper, we examine this hypothesis and its consequences in a setting in which multiple agents are selecting from a common pool of applicants. We propose the Multi-agent Fair-Greedy policy, which balances greedy score maximization and fairness. Under this policy, we prove that the resource pool and the admissions converge to a long-term fairness target set by the agents when the score distributions across the groups in the population are identical. We provide empirical evidence of the existence of equilibria under non-identical score distributions through synthetic and adapted real-world datasets. We then sound a cautionary note for more complex applicant pool evolution models, under which uncoordinated behavior by the agents can cause negative reinforcement, leading to a reduction in the fraction of under-represented applicants. Our results indicate that, while positive reinforcement is a promising mechanism for long-term fairness, policies must be designed carefully to be robust to variations in the evolution model, with a number of open issues that remain to be explored by algorithm designers, social scientists, and policymakers.
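The core selection step of such a policy can be sketched in a few lines. The paper's actual policy is not specified in this summary, so the boost rule, group labels, and parameter values below are assumptions: while the admitted fraction of the under-represented group sits below the fairness target, its applicants receive a small score boost; otherwise selection is purely greedy.

```python
def fair_greedy_select(applicants, k, target_frac, current_frac, boost=0.1):
    """Pick the top-k applicants by score, adding a small boost to
    under-represented group "B" while its admitted fraction is below
    the fairness target (a hypothetical fair-greedy rule)."""
    bonus = boost if current_frac < target_frac else 0.0
    ranked = sorted(
        applicants,
        key=lambda a: a["score"] + (bonus if a["group"] == "B" else 0.0),
        reverse=True,
    )
    return ranked[:k]

pool = [{"group": "A", "score": 0.75}, {"group": "B", "score": 0.70}]
# Below target: the boost lets the group-B applicant overtake (0.70 + 0.1 > 0.75).
print(fair_greedy_select(pool, k=1, target_frac=0.5, current_frac=0.3)[0]["group"])  # B
# At or above target: plain greedy selection by score.
print(fair_greedy_select(pool, k=1, target_frac=0.5, current_frac=0.6)[0]["group"])  # A
```

The paper's cautionary finding corresponds to iterating a rule like this across multiple uncoordinated agents and a pool-evolution model, where the feedback loop can turn negative.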


The Self 2.0: How AI-Enhanced Self-Clones Transform Self-Perception and Improve Presentation Skills

Zheng, Qingxiao, Huang, Yun

arXiv.org Artificial Intelligence

This study explores the impact of AI-generated digital self-clones on improving online presentation skills. We carried out a mixed-design experiment involving 44 international students, comparing self-recorded videos (control) with self-clone videos (AI group) for English presentation practice. The AI videos utilized voice cloning, face swapping, lip-sync, and body-language simulation to refine participants' original presentations in terms of repetition, filler words, and pronunciation. Machine-rated scores indicated enhancements in speech performance for both groups. Though the groups didn't significantly differ, the AI group exhibited a heightened depth of reflection, self-compassion, and a meaningful transition from a corrective to an enhancive approach to self-critique. Within the AI group, congruence between self-perception and AI self-clones resulted in diminished speech anxiety and increased enjoyment. Our findings recommend the ethical employment of digital self-clones to enhance the emotional and cognitive facets of skill development.


Speaker attribution in German parliamentary debates with QLoRA-adapted large language models

Bornheim, Tobias, Grieger, Niklas, Blaneck, Patrick Gustav, Bialonski, Stephan

arXiv.org Artificial Intelligence

The growing body of political texts opens up new opportunities for rich insights into political dynamics and ideologies but also increases the workload for manual analysis. Automated speaker attribution, which detects who said what to whom in a speech event and is closely related to semantic role labeling, is an important processing step for computational text analysis. We study the potential of the large language model family Llama 2 to automate speaker attribution in German parliamentary debates from 2017 to 2021. We fine-tune Llama 2 with QLoRA, an efficient training strategy, and find that our approach achieves competitive performance in the GermEval 2023 Shared Task On Speaker Attribution in German News Articles and Parliamentary Debates. Our results shed light on the capabilities of large language models in automating speaker attribution, revealing a promising avenue for computational analysis of political discourse and the development of semantic role labeling systems.
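A QLoRA setup in this spirit can be sketched with the Hugging Face `transformers` and `peft` libraries: the base model's weights are frozen in 4-bit NF4 quantization while small low-rank adapters are trained on top. The authors' exact hyperparameters and model variant are not stated in this summary, so the values below (rank, alpha, target modules, 7B checkpoint) are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantization of the frozen base weights (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # NormalFloat4 quantization
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",           # assumed model variant
    quantization_config=bnb_config,
)

# Low-rank adapters on the attention projections (assumed choices).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only the adapter weights train
```

The adapted model would then be fine-tuned on speaker-attribution examples (e.g. debate passages paired with structured who-said-what-to-whom annotations) formatted as instruction-style text.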


America's veterans can inspire the next generation to serve

FOX News

Fox News senior national security correspondent Jennifer Griffin has the latest on why the Army is expected to fall short of its 2023 goal on 'America Reports.' Since the first brave Americans took up arms to claim their freedom in the War of Independence, our proud military tradition has sustained our nation and kept us safe. Today, by some accounts, this tradition is in danger of dying out. Some of the loudest alarm bells are coming from the armed forces themselves. The Wall Street Journal reports that most branches expect to miss their recruitment targets this year by significant margins – 15,000 for the Army, 10,000 for the Navy and 3,000 for the Air Force. The Marine Corps is on track to meet its quota, but Marine officials still described a "challenging" recruiting climate.


The five things you should never do in your career

FOX News

People in Texas sounded off on AI job displacement, with half of people who spoke to Fox News convinced that the tech will rob them of work. America is at a once-in-a-generation turning point around work: 70% of us are unhappy with what we do; three-quarters of us say we plan to look for new work this year. Altogether, 100 million Americans will sit down with someone they love this year and say, "I'm not happy with what I'm doing and want to do work that makes me happy." The problem: Most of the advice we receive around work is outdated, misguided or flat wrong. I've spent the last six years crisscrossing the country collecting hundreds of stories of Americans who made enormous changes in their work lives.


(Machine) Learning to Be Like Thee? For Algorithm Education, Not Training

Blazquez, Susana Perez, Hipolito, Inas

arXiv.org Artificial Intelligence

This paper argues that Machine Learning (ML) algorithms must be educated. The moral decisions of ML-trained algorithms are ubiquitous in human society, sometimes reverting societal advances that governments, NGOs, and civil society have achieved with great effort in recent decades, or that are still on the path to being achieved. While their decisions have an incommensurable impact on human societies, these algorithms are among the least educated agents known (their data incomplete, un-inclusive, or biased). ML algorithms are not something separate from our human idiosyncrasy but an enactment of our most implicit prejudices and biases. Some research is devoted to "responsibility assignment" as a strategy to tackle immoral AI behaviour. Yet this paper argues that the solution for AI ethical decision-making resides in the "education" (as opposed to the "training") of ML algorithms. Drawing on an analogy between ML and child education for social responsibility, the paper offers clear directions for responsible and sustainable AI design, specifically with respect to how to educate algorithms to decide ethically.


World Economic Forum chair Klaus Schwab declares on Chinese state TV: 'China is a model for many nations'

FOX News

Center for American Security's Fred Fleitz unpacks the national security risks posed by China's access to TikTok data and Chinese-made drones flying over Washington D.C. World Economic Forum founder and Chair Klaus Schwab recently sat down for an interview with a Chinese state media outlet and proclaimed that China was a "role model" for other nations. Schwab, 84, made these comments during an interview with CGTN's Tian Wei on the sidelines of last week's APEC CEO Summit in Bangkok, Thailand. Schwab said he respected China's "tremendous" achievements at modernizing its economy over the last 40 years. "I think it's a role model for many countries," Schwab said, before qualifying that he thinks each country should make its own decisions about what system it wants to adopt.


50 women in robotics you need to know about 2022

Robohub

Our Women in Robotics list turns 10 this year and we are delighted to introduce you to another amazing "50 women in robotics you need to know about" as we also celebrate Ada Lovelace Day. We have now profiled more than 300 women AND non-binary people making important contributions to robotics since the list began in 2013. This year our 50 come from robotics companies (small and large), self-driving car companies, governments, research organizations and the media. The list covers the globe, with the chosen ones having nationalities from the EU, UK, USA, Australia, China, Turkey, India and Kenya. A number of women come from influential companies that are household names such as NASA, ABB, GE, Toyota and the Wall Street Journal.